When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics

Yiven, Zhu

arXiv.org Artificial Intelligence

Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.
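To make the abstract's central point concrete, here is a minimal sketch (assumptions mine, not from the paper) of a tabular actor-critic learner on a two-armed bandit. The critic's temporal-difference error `delta` is the "internal reward signal" the abstract refers to: a purely computational quantity that drives learning, and which cannot by itself be read as a welfare measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-armed bandit; arm 1 has the higher expected reward.
true_means = [0.2, 0.8]
n_actions = 2

prefs = np.zeros(n_actions)   # actor: action preferences (policy logits)
v = 0.0                       # critic: estimated expected reward
alpha_actor, alpha_critic = 0.1, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(2000):
    pi = softmax(prefs)
    a = rng.choice(n_actions, p=pi)
    r = rng.normal(true_means[a], 0.1)

    delta = r - v                 # prediction error: the latent "internal reward signal"
    v += alpha_critic * delta     # critic update toward achieved reward

    # actor update: policy-gradient step for a softmax policy
    prefs[a] += alpha_actor * delta * (1 - pi[a])
    prefs[1 - a] -= alpha_actor * delta * pi[1 - a]

# After learning, the policy concentrates on the higher-mean arm and
# v approximates the reward achieved under that policy.
```

The point of the sketch is that `delta` and `v` are model-internal quantities; mapping them onto neural activity, and then onto welfare claims, requires exactly the validated neural-computational mapping and explicit normative criterion the paper argues for.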


Reviews: Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

Neural Information Processing Systems

This paper proposes a new measure of fairness for classification and regression problems based on welfare considerations rather than inequality considerations. This measure of fairness represents a convex constraint, making it easy to optimize for. They experimentally demonstrate the tradeoffs between this notion of fairness and previous notions. I believe this to be a pretty valuable submission. A welfare-based approach over an inequality-based approach should turn out to be very helpful in addressing all sorts of concerns in the current literature. It also raises a number of follow-up questions; while it is disappointing that they are not addressed here, this suggests the community should take interest in this paper.


Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

Heidari, Hoda, Ferrari, Claudio, Gummadi, Krishna, Krause, Andreas

Neural Information Processing Systems

We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision making systems: risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
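The link between cardinal welfare and bounded inequality can be illustrated with a toy calculation. This sketch is mine, not the paper's code: it uses a CRRA-style utility b**rho with 0 < rho < 1 (a concave function, so a lower bound on average welfare is a convex constraint on the benefit vector) and shows that, holding total benefit fixed, the more equal allocation scores higher.

```python
import numpy as np

def cardinal_welfare(benefits, rho=0.5):
    """Average cardinal welfare of individual benefits.

    With 0 < rho < 1 and nonnegative benefits, b**rho is concave,
    so requiring cardinal_welfare(b) >= tau is a convex constraint
    that can be added to a convex loss minimization problem.
    """
    b = np.asarray(benefits, dtype=float)
    return float(np.mean(b ** rho))

# Two allocations with the same total benefit (2.0 over 4 people):
equal = [0.5, 0.5, 0.5, 0.5]
unequal = [1.7, 0.1, 0.1, 0.1]

w_equal = cardinal_welfare(equal)      # concavity rewards equality
w_unequal = cardinal_welfare(unequal)  # concentrated benefits score lower
```

Because the concave utility penalizes concentrating benefits on a few individuals, a floor on this welfare measure indirectly caps how unequal the outcome vector can be, which is the intuition behind the paper's inequality-bounding result.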